AoFE Research No. 35: Updating the Revised Classroom Practices Survey
Future Education Research
The Academy of Future Education is dedicated to transforming and innovating education by studying the key challenges of global education reform today and proposing ideas and pathways to promote education reform.
The Academy's main research areas are: lifelong education and teacher development, child development and family education, student-centred educational philosophy and practice, management and leadership for the future of education, and technology-driven changes in education for the future. The Academy offers a PhD in Education programme and is committed to training researchers in the future of education.
Updating the Revised Classroom Practices Survey: An instrument to measure teachers' classroom practices
Publication Information:
Pereira, N., Tay, J., Desmet, O., Maeda, Y., & Gentry, M. (2021). Validity evidence for the Revised Classroom Practices Survey: An instrument to measure teachers' differentiation practices. Journal for the Education of the Gifted, 44, 31–55. https://doi.org/10.1177/0162353220978304
1. Future trends, challenges, and problems in education
Teaching practices have changed significantly during the past several decades to include increased awareness of the diversity in classrooms (Tomlinson, 2014). In today's classrooms, students come from diverse cultural and racial backgrounds and bring a wide range of learning needs and interests. Additionally, these students often have a wide range of readiness and achievement levels, which creates a challenge for most classroom teachers. Although teachers can use differentiation strategies to meet the needs of their students, a gap remains in teachers' knowledge and application of such strategies in the classroom. With growing attention to diversity and emphasis on differentiated instruction, there is increasing demand to learn what actually happens in classrooms. Various assessment tools, including observation protocols and self-assessment instruments, have been developed to collect information on classroom practices. Nevertheless, few instruments for measuring classroom and teaching practices have well-established evidence for the reliability and validity of their data (Walkington et al., 2011).
Considering the importance of gathering information on classroom practices for evaluation purposes, this lack of psychometrically sound instruments is concerning. A need exists for instruments that can yield valid and reliable data to help teachers reflect on their classroom practices related to differentiation. The Classroom Practices Survey (CPS; Archambault et al., 1993) has been used by researchers and practitioners to gather information about classroom practices. However, some items on the survey are dated, and our previous research (Pereira et al., 2019) identified the need to improve the validity evidence for the CPS. Therefore, the aim of this study was to update the CPS and fill this gap in the literature. Scale revision that excludes original items and rewords others requires revalidation of the instrument, because such modifications can affect the interrelationships among the remaining items.
2. Methods and research questions
The CPS was originally developed to assess the extent to which teachers differentiate instruction for gifted and talented students in their regular classrooms (Archambault et al., 1993). The original CPS contained 39 Likert-type items with six options (0 = never to 5 = more than once a day) to measure the frequency with which teachers used a variety of differentiation strategies (i.e., classroom practices) with average- and high-achieving students. The six subscales of the CPS were (a) Questioning and thinking (QT), (b) Providing challenges and choices (CC), (c) Reading and writing assignments (RW), (d) Curriculum modifications (CM), (e) Enrichment centers (EC), and (f) Seatwork (SW). The goal of the current study was to examine and evaluate the psychometric properties of the 2017 revision of the CPS (hereafter CPS-R) for assessing the teaching practices teachers use with students at different achievement levels (i.e., low-, average-, and high-achieving students). Thus, we developed the following research questions.
Research Question 1: To what extent does CPS-R yield reliable data that can be used to measure teachers’ differentiation practices for students with low, average, and high achievement levels? Do the four CPS-R subscales (i.e., QT, CC, RW, and CM) yield acceptable reliability indices across the three achievement levels?
Research Question 2: To what extent does CPS-R yield valid data that can be used to measure teachers' differentiation practices for students with low, average, and high achievement levels? Do the four CPS-R subscales yield valid data across the three achievement levels?
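Research Question 1 asks whether the subscales yield acceptable reliability indices; in studies of this kind, internal consistency is typically summarized with Cronbach's alpha. The sketch below is purely illustrative and is not the authors' analysis code; the item names and response data are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one subscale.

    Rows are respondents; columns are the Likert items of the subscale.
    """
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: five teachers answering three CM-subscale items
# on the 0-5 frequency scale described above.
cm_items = pd.DataFrame({
    "cm_1": [0, 2, 3, 5, 4],
    "cm_2": [1, 2, 3, 4, 4],
    "cm_3": [0, 3, 2, 5, 5],
})
print(f"alpha = {cronbach_alpha(cm_items):.2f}")
```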
We completed two rounds of revisions on the original CPS (Archambault et al., 1993) as a preliminary study. First, based on the findings of a previous study (Pereira et al., 2019), we revised the original CPS, rewording items to be more parsimonious and aligned with the structure of other CPS items. We then conducted a confirmatory factor analysis (CFA) on the data to evaluate the structural construct validity of the revised instrument. The results of the CFA did not meet the criteria for adequate model fit, so a second round of revisions was completed with special attention to increasing the reliability of the CM subscale, which had lower internal consistency estimates than the other subscales. In this round, we added items to all four subscales, particularly the CM subscale. We created the new items based on currently recommended research-based instructional practices, such as the NAGC programming standards (NAGC, 2010) and practices described in recently published books on differentiation (e.g., Gentry et al., 2014; Tomlinson, 2014).
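The paper does not specify the software used for the CFA. As an illustration only, a four-factor measurement model of the kind evaluated here could be specified in Python with the semopy package, which accepts lavaan-style model syntax; the indicator names and data file below are hypothetical placeholders, not the actual CPS-R items.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical four-factor CFA for the CPS-R subscales; the indicator
# names (qt_1 ... cm_3) are placeholders, not the actual CPS-R items.
MODEL_DESC = """
QT =~ qt_1 + qt_2 + qt_3
CC =~ cc_1 + cc_2 + cc_3
RW =~ rw_1 + rw_2 + rw_3
CM =~ cm_1 + cm_2 + cm_3
"""

df = pd.read_csv("cps_r_responses.csv")  # hypothetical file, one column per item

model = Model(MODEL_DESC)
model.fit(df)

# calc_stats reports fit indices such as CFI, TLI, and RMSEA, the kinds
# of criteria against which "adequate model fit" is judged.
print(calc_stats(model).T)
```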
Next, we used CPS-R to collect data on teachers' use of differentiation strategies. A total of 67 schools from 20 school districts in seven US states participated in the project. These schools were part of a larger research project examining the grouping and differentiation strategies that elementary teachers use with all students. Teachers in these schools were given access to an online module that provided information about differentiation strategies. They also placed students into one of five achievement categories (low, low average, average, above average, or high achieving) following specific guidelines (Gentry et al., 2014). For example, teachers working with high-achieving students did not have any low-achieving students in their classrooms. Teachers completed CPS-R for each achievement group in their classrooms, so most teachers completed CPS-R for at least two achievement groups (e.g., average- and high-achieving students).
We collected CPS-R data from a total of 739 teachers. After listwise deletion of records with missing responses on the scale, the sample sizes for the low-, average-, and high-achieving groups were 707 (405 in treatment schools and 302 in delayed-treatment schools), 658 (375 and 283), and 661 (377 and 284), respectively.
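Listwise deletion simply drops any record with one or more missing item responses before analysis. A minimal pandas sketch, assuming a hypothetical response file with one row per teacher-by-group record and a group column:

```python
import pandas as pd

# Hypothetical layout: one row per teacher-by-achievement-group record,
# one column per CPS-R item, plus a 'group' column (low/average/high).
responses = pd.read_csv("cps_r_responses.csv")
item_cols = [c for c in responses.columns if c != "group"]

# Listwise deletion: keep only rows with a response to every item.
complete = responses.dropna(subset=item_cols)

# Analysis sample sizes per achievement group (the paper reports
# n = 707, 658, and 661 after this step).
print(complete["group"].value_counts())
```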
3. Main findings, contributions, and implications
In this study, we presented the process of revising the CPS (Archambault et al., 1993), which had not previously undergone additional evaluation or revision. The psychometric evidence from our results supports the conclusion that CPS-R is a good option for collecting information on teachers' use of differentiation strategies: it is more up to date than similar instruments and has evidence of internal consistency. Internal consistency is a prerequisite for evaluating how well measurement models are supported by data.
However, it is also crucial to evaluate validity evidence, in particular construct-related validity such as model fit to data, in the process of scale validation (Schmitt, 1996). Results show that model fit for CPS-R is generally in the acceptable range. Although the revised instrument is an improvement over the original CPS (Archambault et al., 1993) and the most recent version of the CPS (Pereira et al., 2019), the construct-related validity evidence for CPS-R did not reach an optimal level, which leaves us with several considerations about the instrument. One consideration relates to the gap between how differentiation has been defined in research and practice (including educational standards) and how it is enacted in current classrooms. Differentiated instruction has become a common practice in most classrooms (Coubergs et al., 2017; OECD, 2016). However, the conceptual understanding of what differentiation entails is still evolving (Tomlinson & Jarvis, 2009), and differentiation practices in the classroom are no longer limited to addressing students' readiness levels, learning needs, and interests. These continuing changes in what differentiated instruction entails affect how constructs related to differentiation can be measured. The limited literature base on specific aspects of differentiation might have contributed to the challenge of developing an instrument with strong construct-related validity.
Another consideration relates to the potential disconnect between research and practice. Not all differentiation practices recommended by experts are captured in CPS-R. Differentiation and classrooms have changed significantly since the 1990s, when the CPS was originally developed. In the revision process, we reviewed the literature on differentiated instruction so that item development would reflect the most recent practices and knowledge. However, limited research exists on using differentiation as an approach to curriculum and instruction for all students, rather than only for students who are struggling or for those with gifts and talents.
Nevertheless, CPS-R has practical value in helping teachers understand and reflect on their own classroom practices. It can also be used by administrators and researchers interested in how often educators use specific differentiation practices in their classrooms or how teachers' differentiation practices interact with students' performance. However, we emphasize that CPS-R should not be used to make high-stakes decisions about teachers or their classrooms, as that is not the purpose of the instrument.
4. About the researcher
Dr Juliana Tay
Dr Juliana Tay holds a PhD in Educational Studies (Gifted Education) from Purdue University. She has over a decade of experience researching and teaching high-ability children from all over the world. Her research interests include issues in gifted identification and the evaluation of gifted programs. She is currently working on research in digital game-based learning and the use of adaptive learning tools in designing upskilling programs. She is also available to supervise PhD students.
Social media editor: Xiaoyan Jin